Developing Mapping and Evaluation Techniques for Textual Case-Based Reasoning
Abstract
Textual Case-Based Reasoning (CBR) is not simply Information Retrieval (IR) over text documents that also happen to be cases. Nor does it involve only techniques for automatically determining what cases represented as texts are about, or for automatically indexing such cases under relevant features. Textual CBR is still case-based reasoning, and for us that means drawing inferences about problem situations by comparing them to past decided cases. This property distinguishes textual CBR from other techniques such as IR. In other words, we believe that (1) textual CBR should also involve drawing inferences by computationally comparing cases expressed as texts. More accurately, given the technical difficulties of computationally comparing case texts, we take this as our working hypothesis. We believe it is worth investing a great deal of effort to confirm this hypothesis before we are willing to accept a watered-down conception of what textual CBR should be about.

At the same time, we do not minimize the difficulties of computationally comparing case texts. While cases may be compared computationally in terms of numerical feature weights, symbolic arguments, or adaptations, arranging for cases described as texts to be compared in these ways raises many research questions. Chief among the problems to be solved is mapping text to comparison-enabling structures. We maintain that (2) to be successful, textual CBR will require textual descriptions of cases to be mapped onto structural representations that facilitate computationally comparing cases. It is important to realize that textual CBR does not avoid the need for structured case comparisons. While it may be true, as the Call for Participation asserts, that "many CBR applications now require the handling of only semi-structured or even full-text cases rather than the highly structured cases of more traditional CBR systems", textual CBR cannot dispense with structure insofar as it is necessary to support case comparison. In this paper, we elaborate one such structure, called "factors", which facilitates comparing problems and cases. Our experiments in automatically classifying cases expressed as texts are aimed at identifying the factors implicit in the textual case descriptions.

Evaluation is a crucial step in any research project, and evaluating textual CBR systems is particularly challenging. We maintain that (3) an evaluation of textual CBR has to be designed carefully to disentangle the aspects to be assessed from other aspects of what is inevitably a complex system, and to adopt appropriate standards for comparing system performance. Moreover, evaluation criteria for measuring success must augment those commonly applied to assess information retrieval of textual documents. The widely used IR measures of precision and recall do not capture all of the information required for assessing the performance of a textual CBR system. We develop this position on evaluation in a separate position paper.
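As a rough illustration of what a factor-based, comparison-enabling representation might look like, the sketch below assigns factors to case texts via keyword triggers and compares cases by the overlap of their factor sets. The factor names, trigger keywords, and Jaccard measure are illustrative assumptions only; they do not reproduce the classification experiments described in the paper, where factors are identified by trained classifiers rather than keyword matching.

# Minimal sketch: mapping case texts to "factors" and comparing cases by
# the factors they share. Factor names and keyword triggers are invented
# for illustration; a real system would learn to assign factors from
# annotated case texts.

FACTOR_TRIGGERS = {
    "F1_DisclosureInNegotiations": ["disclosed", "negotiation"],
    "F6_SecurityMeasures":         ["security", "nondisclosure agreement"],
    "F15_UniqueProduct":           ["unique", "sole source"],
}

def extract_factors(case_text: str) -> set:
    """Assign every factor whose trigger keywords appear in the case text."""
    text = case_text.lower()
    return {
        factor
        for factor, triggers in FACTOR_TRIGGERS.items()
        if any(t in text for t in triggers)
    }

def factor_overlap(problem_text: str, case_text: str) -> float:
    """Jaccard overlap of factor sets: one simple comparison-enabling measure."""
    p, c = extract_factors(problem_text), extract_factors(case_text)
    return len(p & c) / len(p | c) if (p | c) else 0.0

if __name__ == "__main__":
    problem = "The plaintiff disclosed the formula during negotiation talks."
    past_case = "Information was disclosed in a negotiation; no security measures were taken."
    print(extract_factors(problem))          # {'F1_DisclosureInNegotiations'}
    print(factor_overlap(problem, past_case))  # 0.5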
Similar Resources
Extending jCOLIBRI for Textual CBR
This paper summarises our work in textual Case-Based Reasoning within jCOLIBRI. We use Information Extraction techniques to annotate web pages to facilitate semantic retrieval over the web. Similarity matching techniques from CBR are applied to retrieve from these annotated pages. We demonstrate the applicability of these extensions by annotating and retrieving documents on the web.
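The similarity matching over annotated pages described above can be sketched, very roughly, as retrieval over attribute-value annotations using a weighted average of per-attribute similarities. The attribute names, weights, and flat dictionary representation below are assumptions for illustration only and do not correspond to the actual jCOLIBRI API.

# Illustrative sketch (not the jCOLIBRI API): retrieving annotated documents
# by a weighted average of per-attribute similarities.

def attribute_similarity(a, b) -> float:
    """Exact match for this sketch; a real system would use richer measures."""
    return 1.0 if a == b else 0.0

def weighted_similarity(query: dict, case: dict, weights: dict) -> float:
    total = sum(weights.values())
    score = sum(
        w * attribute_similarity(query.get(attr), case.get(attr))
        for attr, w in weights.items()
    )
    return score / total if total else 0.0

def retrieve(query: dict, cases: list, weights: dict, k: int = 3) -> list:
    """Return the k annotated documents most similar to the query."""
    ranked = sorted(cases, key=lambda c: weighted_similarity(query, c, weights), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    weights = {"topic": 2.0, "language": 1.0}
    annotated_pages = [
        {"url": "http://example.org/a", "topic": "textual CBR", "language": "en"},
        {"url": "http://example.org/b", "topic": "information retrieval", "language": "en"},
    ]
    query = {"topic": "textual CBR", "language": "en"}
    print(retrieve(query, annotated_pages, weights))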
Evaluation of Textual CBR Approaches
Evaluation is a crucial step in a research project: it demonstrates how well the chosen approach and the implemented techniques work, and it can uncover limitations as well as point toward improvements and future research. A formal evaluation also facilitates comparing the project to previous work, and enables other researchers to assess its usefulness to their problems. Evaluating Textual CBR sys...
Applying Case-Based Reasoning to Email Response
In this paper, we describe a case-based reasoning approach for the semi-automatic generation of responses to email messages. This task poses some challenges from a case-based reasoning perspective, especially with respect to the precision of the retrieval phase and the adaptation of textual cases. We are currently developing an application for the investor relations domain. This paper discusses how some of t...
Applying Machine Translation Evaluation Techniques to Textual CBR
The need for automated text evaluation is common to several AI disciplines. In this work, we explore the use of Machine Translation (MT) evaluation metrics for Textual Case Based Reasoning (TCBR). MT and TCBR typically propose textual solutions and both rely on human reference texts for evaluation purposes. Current TCBR evaluation metrics such as precision and recall employ a single human refer...
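To make the contrast with single-reference metrics concrete, the following sketch computes a clipped n-gram precision of a proposed textual solution against one or more human reference texts, in the spirit of MT metrics such as BLEU. It is a simplified illustration (no brevity penalty or smoothing) rather than the metric developed in that paper, and the example texts are invented.

# Simplified sketch of an MT-style n-gram precision for comparing a proposed
# textual solution against human reference texts (BLEU-like, but without
# brevity penalty or smoothing).

from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate: str, references: list, n: int) -> float:
    cand = ngrams(candidate.lower().split(), n)
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    # A candidate n-gram occurrence counts only up to the maximum number of
    # times it appears in any single reference ("clipping").
    max_ref = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref.lower().split(), n)).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand_counts.items())
    return clipped / len(cand)

if __name__ == "__main__":
    refs = ["reboot the router and check the cable connection"]
    proposed = "please reboot the router and verify the cable"
    print(clipped_precision(proposed, refs, 1))  # unigram precision, 0.75
    print(clipped_precision(proposed, refs, 2))  # bigram precision, ~0.57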
Textual CBR and Information Retrieval: A Comparison
In recent years, quite a number of projects have started to apply case-based reasoning technology to textual documents instead of highly structured cases. For this, the term Textual CBR has been coined. In this paper, we give an overview of the main ideas of Textual CBR and compare it with Information Retrieval techniques. We also present some preliminary results obtained from three projects perfor...